Implement core btc handling logic #70
Conversation
Nice one! I left some comments on where I think some edge cases can be modelled in more detail.
@@ -26,6 +26,17 @@ message SubmissionKey {
  repeated TransactionKey key = 1;
}

enum RawCheckpointStatus {
Depends on the way you look at it, but I think the raw checkpoint has no status; it's a value type rather than an entity in DDD terms, at least once it is sealed. Even the original status is called just CheckpointStatus.
Agreed, ultimately I use it as epoch state rather than checkpoint status.
func (h Hooks) AfterBTCRollBack(ctx sdk.Context, headerInfo *ltypes.BTCHeaderInfo) {
	h.k.OnTipChange(ctx)
}

func (h Hooks) AfterBTCRollForward(ctx sdk.Context, headerInfo *ltypes.BTCHeaderInfo) {
	h.k.OnTipChange(ctx)
}
Right, this is what we discussed at length about using events vs using queries with @vitsalis. We emit events that, with the right bookkeeping, could be used to decide the status of the submissions without doing any queries against the light client, but the option to do just queries and ignore the events is also there. Both are fine; events are more efficient but more complex to code (you need to track the status of individual submissions), while queries are simple but lead to more wasted computation.
So basically btccheckpoint could build a view of the light client main chain? Sounds like a good optimization! I would start here with queries, and do this a bit later when we already have a working system (with more tests etc.). Imo at this point some wasted computation is not a big deal.
x/btccheckpoint/keeper/keeper.go
Outdated
if k.CheckSubmissionFinalizedNoErr(ctx, sk) {
	ed.Status = types.Finalized
	k.saveFinalizedIndex(ctx, sk)
	k.checkpointingKeeper.SetCheckpointFinalized(ed.RawCheckpoint)
} else if k.CheckSubmissionConfirmedNoErr(ctx, sk) {
	ed.Status = types.Confirmed
	k.saveConfirmedIndex(ctx, sk)
	k.checkpointingKeeper.SetCheckpointConfirmed(ed.RawCheckpoint)
} else {
	k.saveUnconfirmedIndex(ctx, sk)
	k.checkpointingKeeper.SetCheckpointSubmitted(ed.RawCheckpoint)
}
I'm not sure setting the checkpoint state should necessarily be here, in the submission. Here's why:
- It allows submissions to arrive e.g. when the BTC block they are in is already more than k/w-deep. If it's already w-deep, it will set the status to Finalized and skip calling saveConfirmedIndex and saveUnconfirmedIndex altogether. Not sure if anything relies on those, but if it was done differently, it could make sure all intermediate statuses are observed.
- It sets the status to confirmed when the first transaction arrives that is k-deep. But there can be multiple such transactions in that block. If they are sent before the block is k-deep, we register both submissions and pick the winner later. If they are sent at or after k-deep, it's first-come-first-served.
In the sequence diagrams I let the collection of submissions go on as long as the status allows, and use the BTC light client events only to trigger status changes and rewards. That approach doesn't seem to have these edge cases.
hmm, the first point is easy to fix, but the second seems like something not so straightforward.
Ultimately the only drawback of doing it later (during event callbacks) is a bit of increased confirmation latency, as we need to wait for another tip change. We can start this way and see if this becomes a problem.
I'm pretty sure you will never see this becoming a problem in real life, it's an edge case that would only happen if transactions are received with a ~1 hour delay and in the same Babylon block. It's more about the principle.
You are right that the way I described it, only the next block would tip these checkpoints into the next status, but again this is the edge case where we receive submissions more than 1 hour late, so again it won't happen, and in return you have one place to look at when you look for status changes.
A third alternative is to do it in the EndBlocker: if a BTC block has been received in this block, then check the status in the end, but not in each individual transaction.
Both of these avoid having to check for statuses from two different kinds of transactions, and also avoid the weird behaviour where transaction ordering matters, when it usually does not.
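For illustration, here is a minimal sketch of that EndBlocker variant (hypothetical wiring, not part of this PR: BtcHeaderReceivedInBlock is an assumed bookkeeping flag, and the rest of the module boilerplate is elided):

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	abci "github.com/tendermint/tendermint/abci/types"
)

// Hypothetical sketch of the EndBlocker alternative discussed above: statuses are
// re-checked once per block, and only if a BTC header arrived in that block,
// instead of inside each individual submission transaction.
func (am AppModule) EndBlock(ctx sdk.Context, _ abci.RequestEndBlock) []abci.ValidatorUpdate {
	if am.keeper.BtcHeaderReceivedInBlock(ctx) { // assumed bookkeeping flag
		am.keeper.OnTipChange(ctx) // re-runs checkUnconfirmed/checkConfirmed
	}
	return []abci.ValidatorUpdate{}
}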
// Callback to be called when btc light client tip change
func (k Keeper) OnTipChange(ctx sdk.Context) {
	k.checkUnconfirmed(ctx)
	k.checkConfirmed(ctx)
Another thing to check would be the regression from Submitted to Signed or Unsubmitted, which happens if all submissions end up being on non-mainchain forks, whereas before at least one was on the main chain.
This can be used to signal to submitters that they have to re-submit to BTC.
Yup, I plan to add this check in checkUnconfirmed and then use SetCheckpointForgotten to inform the checkpointing module about it.
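As a rough sketch of that plan (only the forgotten branch; the existing k-deep promotion logic is omitted, and the iteration/lookup helpers are assumptions rather than code from this PR):

// Hypothetical sketch: during the unconfirmed pass, if an epoch no longer has any
// submission on the BTC main chain, regress its status and report it back to the
// checkpointing module.
func (k Keeper) checkUnconfirmed(ctx sdk.Context) {
	k.iterateUnconfirmedEpochs(ctx, func(ed *types.EpochData) { // assumed iterator
		if !k.hasSubmissionOnMainChain(ctx, ed) { // assumed helper
			ed.Status = types.Signed
			k.checkpointingKeeper.SetCheckpointForgotten(ed.RawCheckpoint)
		}
	})
}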
x/btccheckpoint/keeper/keeper.go
Outdated
	store.Set(uk, v)
}

func (k Keeper) saveConfirmedIndex(ctx sdk.Context, sk types.SubmissionKey) {
I find the names of these methods, saveConfirmedIndex, saveFinalizedIndex, unintuitive. First I thought they might be saving some kind of height/index at which the epoch was confirmed. Only after seeing the "promote" methods did I start to understand that this is about putting the submission key into confirmed/finalized collections, so that they can be iterated later.
Maybe something like addToConfirmed?
// MainChainDepth returns the depth of the given header hash inside its chain; the second
// return value indicates whether the given header is part of the main chain, i.e. true means
// it is part of the main chain, false that it is part of a fork.
// Depth 0 means the given header is the best known header of its chain.
// An error is returned if the header hash is unknown to the btc light client.
ChainDepth(ctx sdk.Context, headerBytes *btypes.BTCHeaderHashBytes) (uint64, bool, error)
Currently we have a method that is like MainChainDepth, returning -1 if the block is not on the mainchain. Would that suffice? It's not as straightforward to tell the depth of any block because, unlike with the mainchain, we don't have a tip to start iterating from; we would have to check any descendant.
tbh I do not like using -1 as some special case here. Imo a bit more idiomatic is to add a doc comment to the method mentioning that if the header is not on the main chain then the depth has no meaning. So in case of a header not on the main chain, the light client can return something like (0, false, nil).
In that case, maybe we can name this function MainChainDepth since it only returns the depth of main chain headers?
Furthermore, in this case, instead of using -1 as a special case, we are using an extra boolean variable, which only becomes useful when the result is 0 (since 0 is a valid height). In all other cases, the check height != 0 would suffice. Not entirely convinced that using an extra return is more useful than having -1.
Sure, I will rename it to MainChainDepth.
As for -1 vs an additional return, from my point of view both are less than ideal, and we are forced to use them due to a deficiency of the Go type system. In another language we could use some kind of sum type to model the three possible results: the header is known and on the main chain at depth n, or the header is known but on a fork, or the header is invalid/unknown.
Given that both approaches are not ideal, let's stick with the current lightclient approach to avoid code churn; I will adapt to it on my side.
I completely agree about the type system. Don't have any opinion about whether it should be a flag or not. Probably a flag is more idiomatic in Go, like with maps, where you can write value, ok := m[key] or however it works. Three results seem like a mouthful, but if this method doesn't return errors, a flag could be a good way to raise awareness of the possibility that the result might have to be ignored.
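For illustration, a caller consuming the comma-ok style return discussed here could look roughly like this (a sketch against the interface as currently proposed; the PR may end up adapting to the existing -1 convention instead, and the helper itself is hypothetical):

// depthOnMainChain is a hypothetical helper showing how the (depth, onMainChain, err)
// return is consumed, mirroring the map comma-ok idiom.
func (k Keeper) depthOnMainChain(ctx sdk.Context, tk *types.TransactionKey) (uint64, bool, error) {
	depth, onMainChain, err := k.btcLightClientKeeper.ChainDepth(ctx, tk.Hash)
	if err != nil {
		// the header hash is unknown or invalid to the light client
		return 0, false, err
	}
	if !onMainChain {
		// the header is known but currently on a fork; depth carries no meaning here
		return 0, false, nil
	}
	// the header is on the main chain, depth blocks below the tip (0 == tip)
	return depth, true, nil
}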
// SetCheckpointForgotten informs the checkpointing module that this checkpoint lost
// all submissions on the btc chain
SetCheckpointForgotten(rawCheckpoint []byte)
Is this going to be emitted when all submissions end up being orphaned? I mean, when they are not forgotten, they just don't exactly count as submitted, in the sense that if BTC keeps growing on the longest chain we have now, they will not be confirmed?
Or is it for pruning?
My plan is to use it when a checkpoint loses all its btc submissions, so that the checkpointing module will know that a once-submitted checkpoint is no longer known to be on the btc chain. Although I want to tackle it in the next pr, because it requires a bit more work to remove forgotten checkpoints.
Sorry, I'm still not sure what you mean exactly. Say I have this chain:

B0 - B1

And there's a checkpoint submission in B1, but suddenly I get another block:

B0 - B1
  \
   B1'

Say B1' has more work than B1 and the tip changed. I have not actually forgotten that there is a submission in B1, it might become the main chain again if someone built B2. But currently it's not, and the checkpoint has to be submitted again, unless it's already in B1'.

So, I would not remove any forgotten checkpoints here, just set its status back to Signed to indicate that it's as if it hasn't been submitted. If someone is building on B1', it needs to be submitted again; if they build on B1, then it doesn't.
So this is exactly what I plan ultimately to do. I won't remove the checkpoint submission but only change its state and inform the checkpointing module about it (via the SetCheckpointForgotten callback, although it could be renamed to SetCheckpointSigned). Sorry for the confusion here.
x/btccheckpoint/types/keys.go
Outdated
	UnconfirmedIndexPrefix = []byte{0, 0, 0, 0, 0, 0}
	ConfirmedIndexPrefix   = []byte{1, 1, 1, 1, 1, 1}
	FinalizedIndexPrefix   = []byte{2, 2, 2, 2, 2, 2}
)
Why 6 bytes long?
tbh it is a bit arbitrary. I wanted it to be long enough to avoid the situation where the prefix of some hash matches this prefix only by chance. 6 bytes seems like a good value.
Isn't everything prefixed with these? If you just use 0, 1 and 2, can't you still rely on them being equally long, without any hash overlapping with these?
but submission keys in the mapping submissionKey -> submissionData are not prefixed with anything, so they could easily start with 0, 1 or whatever byte.
Although thinking about it now, maybe it is just easier to add another prefix just for this mapping.
Ultimately I have added prefixes everywhere to avoid such considerations.
I see. I didn't check that part; I thought you were using the pattern the others have adopted, with stuff like whatever_store := store.PrefixStore(WhateverPrefix) (I can't remember the exact syntax), and that these three are all inside one of these collections, not that they are mixed together with heterogeneous data.
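For reference, the Cosmos SDK pattern that comment alludes to looks roughly like this (a sketch using this PR's ConfirmedIndexPrefix; the keeper's storeKey field name is an assumption):

import (
	"github.com/cosmos/cosmos-sdk/store/prefix"
	sdk "github.com/cosmos/cosmos-sdk/types"
)

// Sketch: each collection gets its own prefixed sub-store, so submission keys can
// never collide with the index prefixes, regardless of their leading bytes.
func (k Keeper) confirmedIndexStore(ctx sdk.Context) prefix.Store {
	store := ctx.KVStore(k.storeKey) // storeKey is an assumed keeper field
	return prefix.NewStore(store, types.ConfirmedIndexPrefix)
}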
x/btccheckpoint/types/types.go
Outdated
func NewEmptyEpochData(rawCheckpointBytes []byte) EpochData {
	return EpochData{
		Key:    []*SubmissionKey{},
		Status: Submitted,
I reckon it might be worth having an explicit status that precedes this, like the Signed.
Agreed. I will add it in this pr, but I would add the handling of moving from Submitted to Signed in the next pr as it is a bit tricky.
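As a sketch, the resulting status progression could then look like this in Go (names taken from this thread; the concrete type and values are assumptions, not the PR's final definition):

// Hypothetical sketch of the epoch status progression discussed in this thread.
type EpochStatus int32

const (
	Signed    EpochStatus = iota // checkpoint signed, no known submission on BTC
	Submitted                    // at least one submission known, none k-deep yet
	Confirmed                    // some submission is at least k-deep on the main chain
	Finalized                    // some submission is at least w-deep on the main chain
)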
Does this module emit events already, for example when a submission is admitted?
As for events, the answer is not yet; I want to add emitting events in a separate pr after I have the whole logic of handling submissions done.
x/btccheckpoint/keeper/keeper.go
Outdated
if err == nil {
	return true
} else {
	return false
}
-	if err == nil {
-		return true
-	} else {
-		return false
-	}
+	return err == nil
x/btccheckpoint/keeper/keeper.go
Outdated
	}
}

func (k Keeper) IsAncestor(ctx sdk.Context, parentHash btypes.BTCHeaderHashBytes, childHash btypes.BTCHeaderHashBytes) (bool, error) {
In the btclightclient module I usually refer to btypes as bbl. Maybe we can have a unified way to refer to those for ease of reading. No issue with using btypes as well.
Sure, can use bbl, no strong opinions on my side here :)
x/btccheckpoint/keeper/keeper.go
Outdated
var onMain bool = true
var allAtLeastNDeep = true
for _, tk := range sk.Key {
	depth, onMainChain, e := k.btcLightClientKeeper.ChainDepth(ctx, tk.Hash)
Does it make sense for the ChainDepth method to return a depth if the header is not on the mainchain? How would we decide on the depth (i.e. if it's not an ancestor of the tip, from where do we start counting?) and would such a depth be useful?
In theory one could have multiple tips, some better than others, but if it complicates the implementation it is not really needed. I only care about the depth on the best chain, or the information whether a header is known. I expect errors if some header is unknown or invalid for some reason.
Makes sense. However, there can only be one tip. You can have multiple headers on the same height, but the tip is the one that has the most cumulative work (the sum of its work and its parents').
I am fairly sure we have discussed everything 🙂 👍
Like I said, I'd try to decouple the status from the submission processing, deferring it either to the EndBlocker or to the next submission, but if you want to go with this, that's fine as well.
// SetCheckpointForgotten informs the checkpointing module that this checkpoint lost
// all submissions on the btc chain
SetCheckpointForgotten(rawCheckpoint []byte)
I'm not sure if this keeper is needed. I mean I don't think the checkpoint module needs to be aware of orphaned checkpoints.
My understanding of the reason for keeping this keeper is to tell the vigilante to submit the checkpoint again, but can't this be decided by the vigilante itself? I mean the forgotten checkpoint can only be SUBMITTED, if I understand it correctly. If so, the vigilante can set a timer to resubmit checkpoints that have not been CONFIRMED.
Is there any other particular reason to keep this keeper?
I mean, from the point of view of btccheckpoint, what checkpointing will do with this info is not really that important, but from the pov of the whole system it seems important to relay the information that a checkpoint which we thought was embedded in the btc main chain no longer is.
Also I would not rely so much on what the vigilante would do; there may not be enough vigilantes, as they can all die/lose state/lose network connection etc. Imo the full node should be the ultimate source of truth.
Another argument is purely theoretical and a bit contrived, but imo compelling: from what I understand, after I call SetCheckpointSubmitted the checkpoint is considered Submitted by the checkpointing module, so if there is no SetCheckpointForgotten, we can end up in a situation where btccheckpoint has a different state (Signed) than checkpointing (Submitted), which is quite a discrepancy: someone querying both modules for the checkpoint state would get a different answer from each. Basically the checkpointing module would respond to queries that the checkpoint is submitted onto btc when it is not, i.e. it would lie to its callers. Probably now this does not have big repercussions, but system-wise it seems like a big inconsistency.
Thanks for your clarification. It makes sense. Let's keep this keeper for now. No blocker from my side 👏.
888864e to 4930a96
Implements checks for k-deep and w-deep, i.e. whether an epoch is confirmed and finalized. For now it assumes that there is no possibility of going backwards once an epoch becomes confirmed/finalized. Not sure this will ultimately be the case, but it seems like a good start.
Things left to implement in btccheckpoint after this pr:
@gitferry, @vitsalis please review expected_keepers and check whether these are the interfaces you would like to expose from your modules (if not, comments are appreciated).